Will Humanity Be Rendered Obsolete by AI?

Louadi, Mohamed El, Romdhane, Emna Ben

arXiv.org Artificial Intelligence

This article analyzes the existential risks artificial intelligence (AI) poses to humanity, tracing the trajectory from current AI to ultraintelligence. Drawing on Irving J. Good and Nick Bostrom's theoretical work, plus recent publications (AI 2027; If Anyone Builds It, Everyone Dies), it explores AGI and superintelligence. Considering machines' exponentially growing cognitive power and hypothetical IQs, it addresses the ethical and existential implications of an intelligence vastly exceeding humanity's, fundamentally alien. Human extinction may result not from malice, but from uncontrollable, indifferent cognitive superiority.


Natural, Artificial, and Human Intelligences

Pothos, Emmanuel M., Widdows, Dominic

arXiv.org Artificial Intelligence

Human achievement, whether in culture, science, or technology, is unparalleled in known existence. This achievement is tied to the enormous communities of knowledge made possible by language: leaving theological content aside, it is very much true that "in the beginning was the word," and that in Western societies this became particularly identified with the written word. Therein lies the challenge posed by modern chatbots: they can 'do' language apparently as well as we do, so there is a natural question of whether they can be considered intelligent, in the same way as we are or otherwise. Are humans uniquely intelligent? We consider this question in terms of the psychological literature on intelligence, evidence for intelligence in non-human animals, the role of written language in science and technology, progress with artificial intelligence, the history of intelligence testing (for both humans and machines), and the role of embodiment in intelligence. We conclude that it is increasingly difficult to consider humans uniquely intelligent. Chatbots have current limitations, e.g., concerning perceptual and social awareness, but much attention is currently devoted to overcoming them.


In Defense of the Turing Test and its Legacy

Gonçalves, Bernardo

arXiv.org Artificial Intelligence

The Turing test has faced criticism for decades, most recently at the Royal Society event "Celebrating the 75th Anniversary of the Turing Test." The question of the Turing test's significance has intensified with recent advances in large language model technology, which now enable machines to pass it. In this article, I argue that Turing's original test was co-opted by Joseph Weizenbaum, and I address six of the most common criticisms of the Turing test, all of which are unfair to both Turing's argument and the historical development of AI: the Turing test encourages fooling people; Turing overestimated human intelligence, as people can be easily fooled (the ELIZA effect); the Turing test is not a good benchmark for AI; Turing's 1950 paper is not serious and/or contains contradictions; imitation should not be a goal for AI, and is moreover harmful to society; and passing the Turing test teaches nothing about AI. All six criticisms largely derive from Weizenbaum's influential reinterpretation of the Turing test. The first four fail to withstand a close examination of the internal logic of Turing's 1950 paper, particularly when the paper is situated within its mid-twentieth-century context.


Normality and the Turing Test

Kabbach, Alexandre

arXiv.org Artificial Intelligence

This paper proposes to revisit the Turing test through the concept of normality. Its core argument is that the Turing test is a test of normal intelligence as assessed by a normal judge. First, in the sense that the Turing test targets normal/average rather than exceptional human intelligence, so that successfully passing the test requires machines to "make mistakes" and display imperfect behavior just like normal/average humans. Second, in the sense that the Turing test is a statistical test where judgments of intelligence are never carried out by a single "average" judge (understood as non-expert) but always by a full jury. As such, the notion of "average human interrogator" that Turing talks about in his original paper should be understood primarily as referring to a mathematical abstraction made of the normalized aggregate of individual judgments of multiple judges. Its conclusions are twofold. First, it argues that large language models such as ChatGPT are unlikely to pass the Turing test as those models precisely target exceptional rather than normal/average human intelligence. As such, they constitute models of what it proposes to call artificial smartness rather than artificial intelligence, insofar as they deviate from the original goal of Turing for the modeling of artificial minds. Second, it argues that the objectivization of normal human behavior in the Turing test fails due to the game configuration of the test which ends up objectivizing normative ideals of normal behavior rather than normal behavior per se.
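The abstract's reading of the "average human interrogator" as a normalized aggregate of multiple judges' verdicts can be made concrete with a small sketch. The code below is illustrative only, assuming per-judge probability scores and a simple mean as the normalizing aggregate; the paper itself prescribes no particular formula.

```python
# Illustrative sketch (not from the paper): the "average interrogator" as a
# normalized aggregate of individual judges' verdicts rather than one judge.
from statistics import mean

def jury_verdict(judgments, threshold=0.5):
    """Aggregate per-judge probabilities that the hidden player is human.

    judgments: list of floats in [0, 1], one per judge.
    Returns True when the normalized aggregate (here, the mean) exceeds
    the threshold, i.e. the jury as a whole judges the player human.
    """
    if not judgments:
        raise ValueError("at least one judge is required")
    return mean(judgments) > threshold

# One credulous judge (0.9) can be fooled, yet the jury aggregate is not:
verdicts = [0.9, 0.2, 0.3, 0.4, 0.1]
print(jury_verdict(verdicts))  # False: the mean (0.38) falls below 0.5
```

The point of the sketch is that the test's outcome is a statistical property of the jury, not of any single interrogator.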


Tech billionaires seem to be doom prepping. Should we all be worried?

BBC News

Mark Zuckerberg is said to have started work on Koolau Ranch, his sprawling 1,400-acre compound on the Hawaiian island of Kauai, as far back as 2014. It is set to include a shelter, complete with its own energy and food supplies, though the carpenters and electricians working on the site were banned from talking about it by non-disclosure agreements, according to a report by Wired magazine. A six-foot wall blocked the project from view of a nearby road.


Exploring Societal Concerns and Perceptions of AI: A Thematic Analysis through the Lens of Problem-Seeking

Kayembe, Naomi Omeonga wa

arXiv.org Artificial Intelligence

This study introduces a novel conceptual framework distinguishing problem-seeking from problem-solving to clarify the unique features of human intelligence in contrast to AI. Problem-seeking refers to the embodied, emotionally grounded process by which humans identify and set goals, while problem-solving denotes the execution of strategies aimed at achieving such predefined objectives. The framework emphasizes that while AI excels at efficiency and optimization, it lacks the orientation derived from experiential grounding and the embodied flexibility intrinsic to human cognition. To empirically explore this distinction, the research analyzes metadata from 157 YouTube videos discussing AI. Through a thematic analysis combining qualitative insights with keyword-based quantitative metrics, this mixed-methods approach uncovers recurring themes in public discourse, including privacy, job displacement, misinformation, optimism, and ethical concerns. The results reveal a dual sentiment: public fascination with AI's capabilities coexists with anxiety and skepticism about its societal implications. The discussion critiques the orthogonality thesis, which posits that intelligence is separable from goal content, and instead argues that human intelligence integrates goal-setting and goal-pursuit. It underscores the centrality of embodied cognition in human reasoning and highlights how AI's limitations stem from its current reliance on computational processing. The study advocates for enhancing emotional and digital literacy to foster responsible AI engagement. It calls for reframing public discourse to recognize AI as a tool that augments -- rather than replaces -- human intelligence. By positioning problem-seeking at the core of cognition and as a critical dimension of intelligence, this research offers new perspectives on ethically aligned and human-centered AI development.


DAVID MARCUS: Pope Leo XIV's greatest challenge is already changing the world

FOX News

In Herman Hesse's novel "The Glass Bead Game," published in 1943, a future Europe is controlled by only two powers: the players of that mysterious game, which uses math and musicology to marshal all of human historical knowledge, and the Roman Catholic Church. Though the actual rules and play of the glass bead game are left vague in the book, to the modern reader its use of prompts to generate truth from the archive of history looks remarkably similar to artificial intelligence, arguably the greatest challenge the Church's newly elected pope, Pope Leo XIV, must navigate. Over the course of European history, popes have had enormous influence on the development of science, sometimes in conflict, as with Galileo and Pope Paul V, but also in vital partnership, notably by creating all of the continent's first universities. Indeed, today's Catholic catechism pronounces that science and faith are complementary, not in conflict. It reads in part: "…methodical research in all branches of knowledge, provided it is carried out in a truly scientific manner and does not override moral laws, can never conflict with the faith, because the things of the world and the things of faith derive from the same God." Newly elected Pope Leo XIV appears at the balcony of St. Peter's Basilica at the Vatican on Thursday.


'Don't ask what AI can do for us, ask what it is doing to us': are ChatGPT and co harming human intelligence?

The Guardian

Imagine for a moment you are a child in 1941, sitting the common entrance exam for public schools with nothing but a pencil and paper. You read the following: "Write, for no more than a quarter of an hour, about a British author." Today, most of us wouldn't need 15 minutes to ponder such a question. We'd get the answer instantly by turning to AI tools such as Google Gemini, ChatGPT or Siri. Offloading cognitive effort to artificial intelligence has become second nature, but with mounting evidence that human intelligence is declining, some experts fear this impulse is driving the trend.


Homo Ratiocinator (Reckoning Human)

Communications of the ACM

Homo sapiens, "wise human" in Latin, is the taxonomic species name for modern humans. But observing the current state of the world and its trajectory, it is hard for me to accept the description "wise." I am not the first to object to the "sapiens" descriptor. The French philosopher Henri-Louis Bergson argued in 1911 that a better term would be Homo faber, referring to human tool-making ability. This ability goes back to early humans, about three million years ago. Most importantly, human tools got better and better through innovation and cultural transmission.


The Einstein Test: Towards a Practical Test of a Machine's Ability to Exhibit Superintelligence

Benrimoh, David, Mikus, Nace, Rosenfeld, Ariel

arXiv.org Artificial Intelligence

Creative and disruptive insights (CDIs), such as the development of the theory of relativity, have punctuated human history, marking pivotal shifts in our intellectual trajectory. Recent advancements in artificial intelligence (AI) have sparked debates over whether state-of-the-art models possess the capacity to generate CDIs. We argue that the ability to create CDIs should be regarded as a significant feature of machine superintelligence (SI). To this end, we propose a practical test to evaluate whether an approach to AI targeting SI can yield novel insights of this kind. We propose the Einstein test: given the data available prior to the emergence of a known CDI, can an AI independently reproduce that insight (or one that is formally equivalent)? By achieving such a milestone, a machine can be considered to at least match humanity's past top intellectual achievements, and therefore to have the potential to surpass them.
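The protocol described in the abstract (pre-insight data in, candidate insight out, judged against the historical one) can be sketched as a minimal harness. Everything below is an illustrative assumption: the names `EinsteinTestCase`, `run_einstein_test`, and the toy equivalence judge are not from the paper, which leaves the implementation of "formal equivalence" open.

```python
# Minimal sketch of the Einstein-test protocol from the abstract.
# All identifiers here are illustrative assumptions, not an API from the paper.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class EinsteinTestCase:
    cutoff_corpus: List[str]  # data available before the known CDI emerged
    known_insight: str        # the historical creative/disruptive insight

def run_einstein_test(model: Callable[[List[str]], str],
                      case: EinsteinTestCase,
                      equivalent: Callable[[str, str], bool]) -> bool:
    """Pass iff the model, given only pre-insight data, reproduces the
    known insight or one judged formally equivalent to it."""
    candidate = model(case.cutoff_corpus)
    return equivalent(candidate, case.known_insight)

# Toy usage: a stand-in "model" and a trivial string-equality judge.
case = EinsteinTestCase(
    cutoff_corpus=["Maxwell's equations", "Michelson-Morley null result"],
    known_insight="special relativity",
)
toy_model = lambda corpus: "special relativity"
print(run_einstein_test(toy_model, case, lambda a, b: a == b))  # True
```

The hard part the abstract points at, deciding formal equivalence between a candidate and the historical insight, is deliberately left as a pluggable judge here.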